78 research outputs found
A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images
Predictive coding is attractive for compression onboard spacecraft thanks
to its low computational complexity, modest memory requirements, and the ability
to accurately control quality on a pixel-by-pixel basis. Traditionally,
predictive compression has focused on the lossless and near-lossless modes of
operation, where the maximum error can be bounded but the rate of the compressed
image is variable. Rate control is considered a challenging problem for
predictive encoders due to the dependencies between quantization and prediction
in the feedback loop, and the lack of a signal representation that packs the
signal's energy into few coefficients. In this paper, we show that it is
possible to design a rate control scheme suited to onboard implementation.
In particular, we propose a general framework to select quantizers in each
spatial and spectral region of an image so as to achieve the desired target
rate while minimizing distortion. The rate control algorithm supports lossy
compression, near-lossless compression, and any in-between mode, e.g.,
lossy compression with a near-lossless constraint. While this framework is
independent of the specific predictor used, to demonstrate its performance
we tailor it to the predictor adopted by the CCSDS-123 lossless
compression standard, obtaining an extension that performs lossless,
near-lossless, and lossy compression in a single package. We show that the rate
controller achieves excellent accuracy in the output rate and good
rate-distortion characteristics, and is highly competitive with
state-of-the-art transform coding.
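The quantizer-selection framework can be sketched as a Lagrangian allocation: each region independently picks the quantizer minimizing D + λR, and the multiplier λ is bisected until the total rate meets the target. This is a toy numpy sketch with synthetic rate/distortion tables, an assumption of the sketch rather than the paper's actual onboard scheme or the CCSDS-123 predictor's statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-region rate/distortion tables (an assumption of the sketch):
# quantizer 0 is finest (high rate, low distortion), quantizer n_q-1 coarsest.
n_blocks, n_q = 16, 8
rate = np.sort(rng.uniform(0.5, 8.0, (n_blocks, n_q)), axis=1)[:, ::-1]
dist = np.sort(rng.uniform(0.1, 50.0, (n_blocks, n_q)), axis=1)

def allocate(lmbda):
    """Per region, pick the quantizer minimizing distortion + lambda * rate."""
    choice = np.argmin(dist + lmbda * rate, axis=1)
    idx = np.arange(n_blocks)
    return rate[idx, choice].sum(), dist[idx, choice].sum()

def rate_control(target_rate, iters=60):
    """Bisect lambda: a larger lambda penalizes rate more, lowering the total rate."""
    lo, hi = 0.0, 1e4
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        r, _ = allocate(mid)
        if r > target_rate:
            lo = mid  # rate still too high: need a larger multiplier
        else:
            hi = mid
    return allocate(hi)

total_rate, total_dist = rate_control(target_rate=40.0)
```

Bisection converges because the achieved rate is non-increasing in λ; the actual algorithm would work with rate/distortion estimates computable onboard rather than exhaustive tables.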
Compressive Signal Processing with Circulant Sensing Matrices
Compressive sensing achieves effective dimensionality reduction of signals,
under a sparsity constraint, by means of a small number of random measurements
acquired through a sensing matrix. In a signal processing system, the problem
arises of processing the random projections directly, without first
reconstructing the signal. In this paper, we show that circulant sensing
matrices make it possible to perform a variety of classical signal processing
tasks, such as filtering, interpolation, registration, and transforms, directly
in the compressed domain and in an exact fashion, i.e., without relying
on estimators as proposed in the existing literature. The advantage of the
techniques presented in this paper is that they enable direct
measurement-to-measurement transformations, without the need for costly
recovery procedures.
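The exactness claim hinges on the fact that circulant matrices commute (they are all diagonalized by the same DFT basis), so circular filtering and circulant sensing can be exchanged. A minimal numerical check, using a full square circulant matrix purely for illustration (an actual CS system would subsample its rows):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

def circ(c):
    """Circulant matrix whose first column is c: column j is c rolled by j."""
    return np.stack([np.roll(c, j) for j in range(len(c))], axis=1)

A = circ(rng.standard_normal(n))      # circulant "sensing" matrix
h = np.zeros(n)
h[:5] = rng.standard_normal(5)        # short circular filter
H = circ(h)                           # circular filtering = multiplication by a circulant
x = rng.standard_normal(n)

# Sensing the filtered signal equals filtering the measurements exactly:
sense_then_filter = H @ (A @ x)
filter_then_sense = A @ (H @ x)
assert np.allclose(sense_then_filter, filter_then_sense)
```

Because the identity is exact, the filtered measurements are precisely what a fresh acquisition of the filtered signal would have produced, with no estimation error.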
Graded quantization for multiple description coding of compressive measurements
Compressed sensing (CS) is an emerging paradigm for acquisition of compressed
representations of a sparse signal. Its low complexity is appealing for
resource-constrained scenarios like sensor networks. However, such scenarios
often involve unreliable communication channels, so providing robust
transmission of the acquired data to a receiver is an issue. Multiple
description coding (MDC) effectively combats channel losses for systems without
feedback, thus raising interest in developing MDC methods explicitly
designed for the CS framework that exploit its properties. We propose a
method called Graded Quantization (CS-GQ) that leverages the democratic
property of compressive measurements to effectively implement MDC, and we
provide methods to optimize its performance. A novel decoding algorithm based
on the alternating direction method of multipliers is derived to reconstruct
signals from a limited number of received descriptions. Simulations are
performed to assess the performance of CS-GQ against other methods in the
presence of packet losses. The proposed method is successful at providing
robust coding of CS measurements and outperforms other schemes for the
considered test metrics.
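A toy sketch of the graded-quantization idea: each description carries a fine copy of half of the measurements and a coarse copy of the other half, with the roles swapped, so any single description is usable on its own while both together yield fine quality everywhere. The step sizes below are arbitrary assumptions, and entropy coding and the channel are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 8
y = rng.standard_normal(m)            # compressive measurements to protect

fine, coarse = 0.05, 0.4              # two quantization step sizes (assumed)

def q(v, step):
    """Uniform scalar quantization followed by reconstruction."""
    return np.round(v / step) * step

# Description 1: fine on the first half, coarse on the second; description 2 swaps.
d1 = np.concatenate([q(y[: m // 2], fine), q(y[m // 2 :], coarse)])
d2 = np.concatenate([q(y[: m // 2], coarse), q(y[m // 2 :], fine)])

# Central decoder (both descriptions received): keep the fine copy of every entry.
central = np.concatenate([d1[: m // 2], d2[m // 2 :]])

err_side = np.abs(d1 - y).max()       # worst-case error if one description is lost
err_central = np.abs(central - y).max()
assert err_central <= fine / 2 + 1e-12
assert err_side <= coarse / 2 + 1e-12
```

The democracy of CS measurements is what makes either half equally useful for reconstruction; the paper's ADMM-based decoder additionally exploits sparsity when only some descriptions arrive.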
Image Denoising with Graph-Convolutional Neural Networks
Recovering an image from a noisy observation is a key problem in signal
processing. Recently, it has been shown that data-driven approaches employing
convolutional neural networks can outperform classical model-based techniques,
because they can capture more powerful and discriminative features. However,
since these methods are based on convolutional operations, they are only
capable of exploiting local similarities without taking into account non-local
self-similarities. In this paper we propose a convolutional neural network that
employs graph-convolutional layers in order to exploit both local and non-local
similarities. The graph-convolutional layers dynamically construct
neighborhoods in the feature space to detect latent correlations in the feature
maps produced by the hidden layers. The experimental results show that the
proposed architecture outperforms classical convolutional neural networks for
the denoising task.
Comment: IEEE International Conference on Image Processing (ICIP) 201
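The dynamic neighborhood construction can be sketched as a k-nearest-neighbor search in feature space followed by aggregation over the selected (possibly spatially distant) pixels. This is a simplified numpy sketch with random stand-in weights, not the trained architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, d, k = 20, 8, 3
feat = rng.standard_normal((n_pix, d))   # hidden-layer feature map, one row per pixel

# Pairwise squared distances in feature space; exclude each pixel from its own search.
d2 = ((feat[:, None, :] - feat[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
nbrs = np.argsort(d2, axis=1)[:, :k]     # k nearest neighbors, found dynamically

W_self = 0.1 * rng.standard_normal((d, d))  # stand-in "learned" weights (assumed)
W_nbr = 0.1 * rng.standard_normal((d, d))

agg = feat[nbrs].mean(axis=1)            # non-local aggregation over the kNN graph
out = np.maximum(feat @ W_self + agg @ W_nbr, 0.0)  # graph-conv layer + ReLU
```

Because the neighbor graph is rebuilt from the features at each layer, latent correlations between distant pixels can be exploited, unlike a fixed local convolution window.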
Sampling of graph signals via randomized local aggregations
Sampling of signals defined over the nodes of a graph is one of the crucial
problems in graph signal processing. While sampling is a well-defined operation
in classical signal processing, many new challenges arise when we consider a
graph signal, and defining an efficient sampling strategy is not
straightforward. Recently, several works have addressed this problem. The most
common techniques select a subset of nodes to reconstruct the entire signal.
However, such methods often require the knowledge of the signal support and the
computation of the sparsity basis before sampling. Instead, in this paper we
propose a new approach to this issue. We introduce a novel technique that
combines localized sampling with compressed sensing. We first choose a subset
of nodes and then, for each node of the subset, we compute random linear
combinations of signal coefficients localized at the node itself and its
neighborhood. The proposed method provides theoretical guarantees in terms of
reconstruction and stability to noise for any graph and any orthonormal basis,
even when the support is not known.
Comment: IEEE Transactions on Signal and Information Processing over Networks, 201
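A toy version of the proposed sampling: for each node in the chosen subset, the sample is a random linear combination of the signal restricted to that node's closed neighborhood. The graph, signal, and sampled subset below are arbitrary assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Adjacency matrix of a small undirected graph (assumed for the sketch).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)
x = rng.standard_normal(5)          # graph signal, one value per node

sampled_nodes = [0, 3]              # step 1: choose a subset of nodes
samples, weights = [], []
for v in sampled_nodes:
    mask = A[v] > 0                 # neighbors of v...
    mask[v] = True                  # ...plus v itself (closed neighborhood)
    w = np.zeros(5)
    w[mask] = rng.standard_normal(mask.sum())  # step 2: random local weights
    weights.append(w)
    samples.append(w @ x)           # one randomized local aggregation per node

samples = np.asarray(samples)
```

Each aggregation only touches the node's neighborhood (the weights are zero elsewhere), so the scheme stays localized while still mixing coefficients, which is what lets compressed sensing recover the signal without knowing its support.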
Joint recovery algorithms using difference of innovations for distributed compressed sensing
Distributed compressed sensing is concerned with representing an ensemble of
jointly sparse signals using as few linear measurements as possible. Two novel
joint reconstruction algorithms for distributed compressed sensing are
presented in this paper. These algorithms are based on the idea of using one of
the signals as side information; this makes it possible to exploit joint
sparsity more effectively than existing schemes. They provide gains in
reconstruction quality, especially when the nodes acquire few measurements, so
that the system is able to operate with fewer measurements than required by
other existing schemes. We show that the algorithms achieve better performance
with respect to the state-of-the-art.
Comment: Conference Record of the Forty-Seventh Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 201
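The side-information idea can be sketched in the simplest setting, assuming for illustration that both nodes share one sensing matrix and differ by a 1-sparse innovation: the difference of the measurement vectors is then a compressed view of a much sparser signal, recoverable here with a single matching-pursuit step. All dimensions and the 1-sparse model are assumptions of the sketch, not the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 100, 30, 4

Phi = rng.standard_normal((m, n))          # shared sensing matrix (assumed)
support = rng.choice(n, size=k, replace=False)
x1 = np.zeros(n)
x1[support] = rng.standard_normal(k)
x2 = x1.copy()
x2[support[0]] += 1.5                      # the signals differ in one innovation

y1, y2 = Phi @ x1, Phi @ x2                # each node sends only its measurements

# Joint decoder with x1 as side information: y2 - y1 = Phi @ (x2 - x1),
# and x2 - x1 is 1-sparse, far sparser than x2 itself.
y_diff = y2 - y1
corr = Phi.T @ y_diff                      # matching-pursuit correlation step
j = int(np.argmax(np.abs(corr)))           # estimated innovation location
coef = (Phi[:, j] @ y_diff) / (Phi[:, j] @ Phi[:, j])
x2_hat = x1.copy()
x2_hat[j] += coef                          # patch the side information
```

Recovering the sparse difference needs far fewer measurements than recovering x2 from scratch, which is the source of the gains at low measurement rates.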
Super-resolved multi-temporal segmentation with deep permutation-invariant networks
Multi-image super-resolution from multi-temporal satellite acquisitions of a
scene has recently enjoyed great success thanks to new deep learning models. In
this paper, we go beyond classic image reconstruction at a higher resolution by
studying a super-resolved inference problem, namely semantic segmentation at a
spatial resolution higher than that of the sensing platform. We expand upon
recently proposed models exploiting temporal permutation invariance with a
multi-resolution fusion module able to infer the rich semantic information
needed by the segmentation task. The model presented in this paper recently
won the AI4EO challenge on Enhanced Sentinel 2 Agriculture.
Comment: IGARSS 202
Fast and Lightweight Rate Control for Onboard Predictive Coding of Hyperspectral Images
Predictive coding is attractive for compression of hyperspectral images
onboard spacecraft in light of the excellent rate-distortion performance
and low complexity of recent schemes. In this letter we propose a rate control
algorithm and integrate it into a lossy extension of the CCSDS-123 lossless
compression recommendation. The proposed rate control algorithm overhauls our
previous scheme by being orders of magnitude faster and simpler to implement,
while still providing the same accuracy in terms of output rate and comparable
or better image quality.
Binary Adaptive Embeddings from Order Statistics of Random Projections
We use some of the largest order statistics of the random projections of a
reference signal to construct a binary embedding that is adapted to signals
correlated with that reference. The embedding is characterized from an
analytical standpoint and shown to provide improved performance on tasks such
as classification in a reduced-dimensionality space.
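A sketch of the construction: project the reference, keep the indices of the k largest-magnitude projections, and embed any signal as the sign pattern of those selected projections. All sizes and the correlation model below are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 128, 64, 16

P = rng.standard_normal((m, n))        # random projection matrix
ref = rng.standard_normal(n)           # reference signal

# Adaptation step: keep the k projections with the largest magnitude on the
# reference; along these directions the sign is most robust to perturbations.
proj_ref = P @ ref
keep = np.argsort(np.abs(proj_ref))[-k:]

def embed(x):
    """Binary embedding: signs of the selected projections."""
    return (P[keep] @ x > 0).astype(int)

# A signal strongly correlated with the reference lands on the same binary
# code, because the retained projections sit far from their sign boundaries.
near = ref + 0.01 * rng.standard_normal(n)
code_ref, code_near = embed(ref), embed(near)
```

Selecting the large-magnitude order statistics is exactly what adapts the embedding to the reference: an unadapted embedding would also keep directions where the sign flips under small noise.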
Image dequantization for hyperspectral lossy compression with convolutional neural networks
Significant work has been devoted to methods based on predictive coding for onboard compression of hyperspectral images. This is supported by the new CCSDS 123.0-B-2 recommendation for lossless and near-lossless compression. While lossless compression can achieve high throughput, it can only achieve limited compression ratios. The introduction of a quantizer and local decoder in the prediction loop makes it possible to implement lossy compression with good rate-distortion performance. However, the need for a locally decoded version of a causal neighborhood of the current pixel under coding is a significant limiting factor in the throughput such an encoder can achieve. In this work, we study the rate-distortion performance of a significantly simpler and faster onboard compressor based on prequantizing the pixels of the hyperspectral image and applying a lossless compressor (such as the lossless CCSDS 123.0-B-2) to the quantized pixels. While this is suboptimal in terms of rate-distortion performance compared to having an in-loop quantizer, we compensate for the lower quality with an on-ground post-processor based on modeling the distortion residual with a convolutional neural network. The task of the neural network is to learn the statistics of the quantization error and apply a dequantization model to restore the image.
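The prequantization step can be sketched directly: uniform scalar quantization with step 2δ+1 on integer pixels bounds the absolute reconstruction error by δ, and the quantized indices are what the lossless coder compresses. The bound δ and the toy cube below are assumptions; the CNN dequantizer that refines the midpoint reconstruction is not sketched:

```python
import numpy as np

rng = np.random.default_rng(6)
delta = 3                                    # near-lossless error bound (assumed)

img = rng.integers(0, 4096, size=(4, 4, 8))  # toy integer hyperspectral cube

# Onboard: prequantize the pixels, then hand the indices to a lossless coder
# (e.g. CCSDS 123.0-B-2, not modeled here).
step = 2 * delta + 1
idx = np.round(img / step).astype(int)

# On ground: midpoint dequantization; the paper then applies a CNN that has
# learned the quantization-error statistics to restore the image further.
rec = idx * step
assert np.abs(rec - img).max() <= delta
```

Because quantization happens before (rather than inside) the prediction loop, each pixel can be processed independently, which is what removes the throughput bottleneck of the in-loop design.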